14 research outputs found

    An Outlier Exposure Approach to Improve Visual Anomaly Detection Performance for Mobile Robots

    Full text link
We consider the problem of building visual anomaly detection systems for mobile robots. Standard anomaly detection models are trained using large datasets composed only of non-anomalous data. However, in robotics applications, it is often the case that (potentially very few) examples of anomalies are available. We tackle the problem of exploiting these data to improve the performance of a Real-NVP anomaly detection model, by minimizing, jointly with the Real-NVP loss, an auxiliary outlier exposure margin loss. We perform quantitative experiments on a novel dataset (which we publish as supplementary material) designed for anomaly detection in an indoor patrolling scenario. On a disjoint test set, our approach outperforms alternatives and shows that exposing even a small number of anomalous frames yields significant performance improvements.
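The joint objective described above can be sketched as follows. This is a minimal illustrative assumption of how a flow's negative log-likelihood might be combined with a hinge-style outlier exposure margin loss; the exact loss form, margin, and weighting used in the paper may differ.

```python
# Hedged sketch: joint objective = flow NLL on normal frames
# + a margin (hinge) term pushing anomalous log-likelihoods down.
# The specific hinge form and `margin`/`weight` values are assumptions.

def joint_loss(log_p_normal, log_p_anomalous, margin=1.0, weight=1.0):
    """log_p_normal / log_p_anomalous: per-frame log-likelihoods from the flow."""
    # Standard flow objective: maximize likelihood of normal frames.
    nll = -sum(log_p_normal) / len(log_p_normal)
    # Margin term: penalize anomalous frames whose log-likelihood is not
    # at least `margin` below the average normal log-likelihood.
    ref = sum(log_p_normal) / len(log_p_normal)
    hinge = [max(0.0, lp - (ref - margin)) for lp in log_p_anomalous]
    oe = sum(hinge) / len(log_p_anomalous)
    return nll + weight * oe
```

Even a handful of anomalous frames contributes a gradient through the margin term, which matches the paper's finding that exposing few anomalies already helps.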

    Challenges in Visual Anomaly Detection for Mobile Robots

    Full text link
We consider the task of detecting anomalies for autonomous mobile robots based on vision. We categorize relevant types of visual anomalies and discuss how they can be detected by unsupervised deep learning methods. We propose a novel dataset built specifically for this task, on which we test a state-of-the-art approach; we finally discuss deployment in a real scenario. Comment: Workshop paper presented at the ICRA 2022 Workshop on Safe and Reliable Robot Autonomy under Uncertainty https://sites.google.com/umich.edu/saferobotautonomy/hom

    Path planning for mobile robots in the real world: handling multiple objectives, hierarchical structures and partial information

    Get PDF
Autonomous robots in real-world environments face a number of challenges even when accomplishing apparently simple tasks like moving to a given location. We present four realistic scenarios in which robot navigation takes into account partial information, hierarchical structures, and multiple objectives. We start by discussing navigation in indoor environments shared with people, where routes are characterized by effort, risk, and social impact. Next, we improve navigation by computing optimal trajectories and implementing human-friendly local navigation behaviors. Finally, we move to outdoor environments, where robots rely on uncertain traversability estimations and need to account for the risk of getting stuck or having to change route.
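One common way to handle the multiple objectives mentioned above (effort, risk, social impact) is to scalarize them into a single edge cost and run a standard shortest-path search. The sketch below is an illustrative assumption, not the thesis's actual planner; the graph structure and weight values are hypothetical.

```python
import heapq

def plan_path(graph, start, goal, weights=(1.0, 1.0, 1.0)):
    """Dijkstra over a graph whose edges carry (effort, risk, social_impact)
    tuples, scalarized by a weighted sum. Hypothetical API for illustration.
    `graph`: dict mapping node -> list of (neighbor, (effort, risk, social))."""
    w_e, w_r, w_s = weights
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, (effort, risk, social) in graph.get(u, []):
            nd = d + w_e * effort + w_r * risk + w_s * social
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the path from the predecessor map.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]
```

Changing the weight vector trades off the three objectives, e.g. a cautious robot would increase the risk weight.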

    Learning an Image-based Obstacle Detector With Automatic Acquisition of Training Data

    No full text
We detect and localize obstacles in front of a mobile robot by means of a deep neural network that maps images acquired from a forward-looking camera to the outputs of five proximity sensors. The robot autonomously acquires training data in multiple environments; once trained, the network detects obstacles and their positions even in unseen scenarios, and can be used on different robots not equipped with proximity sensors. We demonstrate both the training and deployment phases on a small, modified Thymio robot.
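The automatic data acquisition described above amounts to pairing each camera frame with synchronized proximity readings, which then serve as regression targets. A minimal sketch, assuming timestamped logs (the pairing tolerance and data layout are hypothetical):

```python
def build_training_pairs(frames, sensor_logs, max_skew=0.05):
    """Pair each timestamped camera frame with the nearest-in-time reading
    of the five proximity sensors; drop pairs with too large a time skew.
    `frames`: list of (t, image); `sensor_logs`: list of (t, [5 readings])."""
    pairs = []
    for t_img, image in frames:
        # Find the sensor reading closest in time to this frame.
        t_sens, readings = min(sensor_logs, key=lambda s: abs(s[0] - t_img))
        if abs(t_sens - t_img) <= max_skew:
            pairs.append((image, readings))  # image is input, readings are targets
    return pairs
```

The resulting (image, readings) pairs can train an image-to-proximity regressor with no manual labeling, which is what lets the trained network replace the physical sensors at deployment time.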

    Mighty Thymio for University-Level Educational Robotics

    No full text
Thymio is a small, inexpensive, mass-produced mobile robot with widespread use in primary and secondary education. In order to make it more versatile and effectively use it in later educational stages, including university levels, we have expanded Thymio's capabilities by adding off-the-shelf hardware and open software components. The resulting robot, which we call Mighty Thymio, provides additional sensing functionalities, increased computing power, networking, and full ROS integration. We present the architecture of Mighty Thymio and show its application in advanced educational activities.

    Realtime Generation of Audible Textures Inspired by a Video Stream

    No full text
We showcase a model to generate a soundscape from a camera stream in real time. The approach relies on a training video with an associated meaningful audio track; a granular synthesizer generates a novel sound by randomly sampling and mixing audio data from that video, favoring timestamps whose frame is similar to the current camera frame; the semantic similarity between frames is computed by a pretrained neural network. The demo is interactive: a user points a mobile phone at different objects and hears how the generated sound changes.
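The similarity-weighted sampling step can be sketched as a softmax over per-frame similarity scores. This is an illustrative assumption of how "favoring similar timestamps" might be implemented; the actual synthesizer's sampling scheme and temperature are not specified in the abstract.

```python
import math
import random

def sample_grain_timestamps(similarities, timestamps, n_grains=4,
                            temperature=0.1, rng=None):
    """Favor timestamps whose stored frame resembles the live camera frame.
    `similarities`: semantic similarity of each stored frame to the live
    frame (e.g. cosine similarity of pretrained-network embeddings).
    Returns `n_grains` timestamps to feed the granular synthesizer."""
    rng = rng or random.Random()
    # Softmax over similarities: higher similarity -> higher probability.
    exps = [math.exp(s / temperature) for s in similarities]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(timestamps, weights=probs, k=n_grains)
```

A lower temperature concentrates sampling on the most similar frames, making the soundscape track the camera more tightly; a higher one yields a more diffuse mix.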

    Hazards&Robots: A dataset for visual anomaly detection in robotics

    No full text
We propose Hazards&Robots, a dataset for Visual Anomaly Detection in Robotics. The dataset is composed of 324,408 RGB frames and corresponding feature vectors; it contains 145,470 normal frames and 178,938 anomalous ones, categorized in 20 different anomaly classes. The dataset can be used to train and test current and novel visual anomaly detection methods, such as those based on deep learning vision models. The data is recorded with a DJI Robomaster S1 front-facing camera. The ground robot, controlled by a human operator, traverses university corridors. Considered anomalies include the presence of humans, unexpected objects on the floor, and defects of the robot. Preliminary versions of the dataset are used in [1,3]. This version is available at [12].